U.S. veteran says he faces retribution from Trump officials for protesting his wrongful arrest

Los Angeles Times

George Retes Jr. is seen in 2020 in Baghdad. The U.S. veteran wrote about what he says was his unlawful arrest during the Glass House ICE raid in July. He says the Department of Homeland Security is now spreading falsehoods against him for speaking out.


Disjunctive and Conjunctive Normal Form Explanations of Clusters Using Auxiliary Information

Downey, Robert F., Ravi, S. S.

arXiv.org Artificial Intelligence

We consider generating post-hoc explanations of clusters generated from various datasets using auxiliary information that was not used by the clustering algorithms. Following terminology used in previous work, we refer to the auxiliary information as tags. Our focus is on two forms of explanations, namely disjunctive form (where the explanation for a cluster consists of a set of tags) and two-clause conjunctive normal form (CNF) explanations (where the explanation consists of two sets of tags, combined through the AND operator). We use integer linear programming (ILP) as well as heuristic methods to generate these explanations. We experiment with a variety of datasets and discuss the insights obtained from our explanations. We also present experimental results regarding the scalability of our explanation methods.
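As a rough illustration of the disjunctive case, a greedy heuristic (a stand-in for the paper's ILP formulation, with illustrative data) can pick tags that cover many in-cluster items while penalizing tags that also appear outside the cluster:

```python
def disjunctive_explanation(cluster_tags, other_tags, max_tags=3):
    """Greedily select tags that cover uncovered in-cluster items,
    penalized by how often each tag occurs out-of-cluster."""
    explanation, covered = [], set()
    for _ in range(max_tags):
        best, best_score = None, 0
        for tag in {t for tags in cluster_tags.values() for t in tags}:
            gain = sum(1 for i, tags in cluster_tags.items()
                       if i not in covered and tag in tags)
            penalty = sum(1 for tags in other_tags.values() if tag in tags)
            if gain - penalty > best_score:
                best, best_score = tag, gain - penalty
        if best is None:  # no tag improves coverage; stop early
            break
        explanation.append(best)
        covered |= {i for i, tags in cluster_tags.items() if best in tags}
    return explanation

# Toy example: items 0-2 form the cluster, items 3-4 do not.
cluster = {0: {"sports", "soccer"}, 1: {"sports", "tennis"}, 2: {"soccer"}}
others = {3: {"politics"}, 4: {"tennis", "politics"}}
explanation = disjunctive_explanation(cluster, others)
```

The disjunctive reading is "an item belongs to this cluster if it carries any of these tags"; the paper's ILP instead optimizes coverage exactly.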


Flexible and Efficient Probabilistic PDE Solvers through Gaussian Markov Random Fields

Weiland, Tim, Pförtner, Marvin, Hennig, Philipp

arXiv.org Artificial Intelligence

Mechanistic knowledge about the physical world is virtually always expressed via partial differential equations (PDEs). Recently, there has been a surge of interest in probabilistic PDE solvers -- Bayesian statistical models, mostly based on Gaussian process (GP) priors, which seamlessly combine empirical measurements and mechanistic knowledge. As such, they quantify uncertainties arising from, e.g., noisy or missing data, unknown PDE parameters, or discretization error by design. Prior work has established connections to classical PDE solvers and provided solid theoretical guarantees. However, scaling such methods to large-scale problems remains a fundamental challenge, primarily due to dense covariance matrices. Our approach addresses the scalability issues by leveraging the Markov property of many commonly used GP priors. It has been shown that such priors are solutions to stochastic PDEs (SPDEs) which, when discretized, allow for highly efficient GP regression through sparse linear algebra. In this work, we show how to leverage this prior class to make probabilistic PDE solvers practical, even for large-scale nonlinear PDEs, through greatly accelerated inference mechanisms. Additionally, our approach allows for flexible and physically meaningful priors beyond what can be modeled with covariance functions. Experiments confirm substantial speedups and accelerated convergence of our physics-informed priors in nonlinear settings.
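The key computational idea (sparse precision matrices from a discretized SPDE prior) can be sketched in a few lines. This is a generic illustration, not the paper's solver: a second-order random-walk prior on a 1-D grid conditioned on a handful of noisy observations, with the posterior mean obtained by one sparse solve.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

n = 200
# Banded precision of a second-order random-walk prior: the discretized
# analogue of a smooth GP, sparse by the Markov property.
D = sparse.diags([1, -2, 1], [0, 1, 2], shape=(n - 2, n))
Q = (D.T @ D + 1e-6 * sparse.eye(n)).tocsc()

# Noisy observations of sin(x) at a few grid sites.
x = np.linspace(0, 2 * np.pi, n)
obs_idx = np.array([10, 60, 120, 180])
rng = np.random.default_rng(0)
y = np.sin(x[obs_idx]) + 0.01 * rng.normal(size=obs_idx.size)

# Observation operator selecting the observed sites.
H = sparse.csr_matrix((np.ones(obs_idx.size),
                       (np.arange(obs_idx.size), obs_idx)), shape=(obs_idx.size, n))
noise_prec = 1.0 / 0.01 ** 2

# Conditioning adds a sparse rank-update to the precision; the posterior
# mean follows from a single sparse linear solve.
Q_post = (Q + noise_prec * (H.T @ H)).tocsc()
mean = spsolve(Q_post, noise_prec * (H.T @ y))
```

Because the posterior precision stays sparse, the cost scales with the number of nonzeros rather than with a dense n-by-n covariance, which is the scalability lever the abstract refers to.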


Refining Minimax Regret for Unsupervised Environment Design

Beukman, Michael, Coward, Samuel, Matthews, Michael, Fellows, Mattie, Jiang, Minqi, Dennis, Michael, Foerster, Jakob

arXiv.org Artificial Intelligence

In unsupervised environment design, reinforcement learning agents are trained on environment configurations (levels) generated by an adversary that maximises some objective. Regret is a commonly used objective that theoretically results in a minimax regret (MMR) policy with desirable robustness guarantees; in particular, the agent's maximum regret is bounded. However, once the agent reaches this regret bound on all levels, the adversary will only sample levels where regret cannot be further reduced. Although there are possible performance improvements to be made outside of these regret-maximising levels, learning stagnates. In this work, we introduce Bayesian level-perfect MMR (BLP), a refinement of the minimax regret objective that overcomes this limitation. We formally show that solving for this objective results in a subset of MMR policies, and that BLP policies act consistently with a Perfect Bayesian policy over all levels. We further introduce an algorithm, ReMiDi, that results in a BLP policy at convergence. We empirically demonstrate that training on levels from a minimax regret adversary causes learning to prematurely stagnate, but that ReMiDi continues learning.
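The stagnation the abstract describes is easy to see in a minimal regret-maximizing adversary (a generic sketch with illustrative values, not the ReMiDi algorithm): the adversary always proposes the level with the largest gap between the optimal and achieved return, so once regret is equalized across levels it never proposes levels where the agent could still improve.

```python
import numpy as np

def select_level(agent_returns, optimal_returns, rng):
    """A minimax-regret adversary: propose the level where the agent's
    regret (optimal return minus achieved return) is largest."""
    regret = optimal_returns - agent_returns
    # Break ties uniformly among maximally regretful levels.
    best = np.flatnonzero(regret == regret.max())
    return int(rng.choice(best))

rng = np.random.default_rng(0)
optimal = np.array([1.0, 1.0, 1.0])
agent = np.array([0.2, 0.9, 0.5])
chosen = select_level(agent, optimal, rng)  # level 0 has the largest regret
```

Once the agent reaches the regret bound on every level, `regret` is flat and the adversary's samples carry no further learning signal outside the regret-maximizing set, which is the limitation BLP is designed to refine away.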


An Open-Source Gloss-Based Baseline for Spoken to Signed Language Translation

Moryossef, Amit, Müller, Mathias, Göhring, Anne, Jiang, Zifan, Goldberg, Yoav, Ebling, Sarah

arXiv.org Artificial Intelligence

Sign language translation systems are complex and require many components. As a result, it is very hard to compare methods across publications. We present an open-source implementation of a text-to-gloss-to-pose-to-video pipeline approach, demonstrating conversion from German to Swiss German Sign Language, French to French Sign Language of Switzerland, and Italian to Italian Sign Language of Switzerland. We propose three different components for the text-to-gloss translation: a lemmatizer, a rule-based word reordering and dropping component, and a neural machine translation system. Gloss-to-pose conversion occurs using data from a lexicon for three different signed languages, with skeletal poses extracted from videos. To generate a sentence, the text-to-gloss system is first run, and the pose representations of the resulting signs are stitched together.
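The pipeline structure described above can be sketched with toy stand-ins (all component names and lexicon entries here are illustrative, not the paper's actual implementation): text is lemmatized, reordered and filtered into glosses, then each gloss's pose sequence is looked up in a lexicon and stitched together.

```python
def text_to_gloss(text, lemmatize, reorder_and_drop):
    """Rule-based variant of the text-to-gloss step (the paper's neural
    machine translation alternative is omitted here)."""
    return reorder_and_drop([lemmatize(w) for w in text.split()])

def glosses_to_pose(glosses, lexicon):
    """Look up each gloss's skeletal pose sequence and stitch the
    sequences into one animation; unknown glosses are skipped."""
    stitched = []
    for g in glosses:
        stitched.extend(lexicon.get(g, []))
    return stitched

# Toy stand-ins for the real components.
lemmatize = lambda w: w.lower().rstrip("s")
reorder_and_drop = lambda glosses: [g for g in glosses if g not in {"the", "a"}]
lexicon = {"dog": ["pose_dog_1", "pose_dog_2"], "run": ["pose_run_1"]}

glosses = text_to_gloss("The dogs run", lemmatize, reorder_and_drop)
poses = glosses_to_pose(glosses, lexicon)
```

In the real system, the lexicon holds skeletal poses extracted from signing videos per language, and the stitched pose sequence is rendered to video as the final pipeline stage.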


LXT in the field: October 2022 conference round up

#artificialintelligence

Our fall conference season is in full swing and we're excited to connect in the coming weeks with even more organizations that are innovating with AI. Here is where you'll find our AI data experts in October: We're thrilled to sponsor the Big Data & AI conference, right in our own backyard! Our AI data experts Phil Hall and Jodie Ruby will be on hand to connect with organizations that are using AI to drive efficiency and competitive advantage. Learn more about our data collection and annotation services, and how we build reliable AI data pipelines for companies of all sizes across a wide range of use cases. Visit our booth #D3 and enter for a chance to win a Bluetooth speaker system!


ArgLegalSumm: Improving Abstractive Summarization of Legal Documents with Argument Mining

Elaraby, Mohamed, Litman, Diane

arXiv.org Artificial Intelligence

A challenging task when generating summaries of legal documents is the ability to address their argumentative nature. We introduce a simple technique to capture the argumentative structure of legal documents by integrating argument role labeling into the summarization process. Experiments with pretrained language models show the benefits of our proposed technique.


The Second Conversational Intelligence Challenge (ConvAI2)

Dinan, Emily, Logacheva, Varvara, Malykh, Valentin, Miller, Alexander, Shuster, Kurt, Urbanek, Jack, Kiela, Douwe, Szlam, Arthur, Serban, Iulian, Lowe, Ryan, Prabhumoye, Shrimai, Black, Alan W, Rudnicky, Alexander, Williams, Jason, Pineau, Joelle, Burtsev, Mikhail, Weston, Jason

arXiv.org Artificial Intelligence

We describe the setting and results of the ConvAI2 NeurIPS competition, which aims to further the state of the art in open-domain chatbots. Some key takeaways from the competition are: (i) pretrained Transformer variants are currently the best-performing models on this task, and (ii) to improve performance on multi-turn conversations with humans, future systems must go beyond single-word metrics like perplexity and measure performance across sequences of utterances (conversations) in terms of repetition, consistency, and balance of dialogue acts.

The Conversational Intelligence Challenge aims at finding approaches to creating high-quality dialogue agents capable of meaningful open-domain conversation. Today, progress in the field is significantly hampered by the absence of established benchmark tasks for non-goal-oriented dialogue systems (chatbots) and of solid evaluation criteria for automatic assessment of dialogue quality. The aim of this competition was therefore to establish a concrete scenario for testing chatbots that aim to engage humans, and to become a standard evaluation tool that makes such systems directly comparable, including open-source datasets, evaluation code (both automatic evaluations and code to run the human evaluation on Mechanical Turk), model baselines, and the winning model itself. Taking into account the results of the previous edition, this year we improved the task, the evaluation process, and the human conversationalists' experience. We did this in part by making the setup simpler for the competitors, and in part by making the conversations more engaging for humans. We provided a dataset from the beginning, Persona-Chat, whose training set consists of conversations between crowdworkers who were randomly paired and asked to act the part of a given persona (randomly assigned, and created by another set of crowdworkers). The paired workers were asked to chat naturally and to get to know each other during the conversation. This produces interesting and engaging conversations that learning agents can try to mimic.